S.-T. Yau College Student Mathematics Contests 2022
Computational and Applied Mathematics
Solve every problem.
Problem 1. Consider $\{p_i(x)\}_{i=0}^{\infty}$, a family of orthogonal polynomials associated with the inner product
$$\langle f, g\rangle=\int_{-1}^{1} f(x)\, g(x)\, w(x)\, dx, \quad w(x)>0 \quad \text{for } x \in(-1,1)$$
where $p_i(x)$ is a polynomial of degree $i$. Let $x_0, x_1, \ldots, x_n$ be the roots of $p_{n+1}(x)$. Construct an orthonormal basis of the subspace of polynomials of degree at most $n$ such that, for any polynomial in this subspace, the coefficients of its expansion in this basis equal the scaled values of the polynomial at the nodes $x_0, x_1, \ldots, x_n$.
Solution: Start by considering $l_i(x)$, $i=0,\ldots,n$, the Lagrange interpolating polynomials of degree $n$ for the nodes $x_0, x_1, \ldots, x_n$. Let us compute the inner product of two such polynomials using Gaussian quadrature, which is exact for polynomials of degree at most $2n+1$. We have
$$\langle l_i, l_j\rangle=\int_{-1}^{1} l_i(x)\, l_j(x)\, w(x)\, dx=\sum_{k=0}^{n} l_i(x_k)\, l_j(x_k)\, w_k=\delta_{ij} w_i$$
where $w_0, w_1, \ldots, w_n$ are the positive weights of the quadrature.
We now normalize the Lagrange interpolating polynomials,
$$R_i(x)=\frac{1}{\sqrt{w_i}}\, l_i(x)$$
and obtain
$$\int_{-1}^{1} R_i(x) R_j(x) w(x)\, dx=\sum_{k=0}^{n} w_k R_i(x_k) R_j(x_k)=\sum_{k=0}^{n} w_k \frac{1}{\sqrt{w_i}}\delta_{ik}\frac{1}{\sqrt{w_j}}\delta_{jk}=\delta_{ij}$$
namely, these functions form an orthonormal basis. The coefficients of a function in this subspace are computed as projections onto the basis,
$$f_i=\langle f, R_i\rangle=\int_{-1}^{1} f(x) R_i(x) w(x)\, dx=\sum_{k=0}^{n} w_k f(x_k) R_i(x_k)=\sum_{k=0}^{n} w_k f(x_k) \frac{1}{\sqrt{w_i}}\delta_{ik}=\sqrt{w_i}\, f(x_i)$$
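The construction can be checked numerically; below is a minimal sketch, assuming the Legendre weight $w(x)=1$ (so the nodes and weights come from `numpy.polynomial.legendre.leggauss`) — an illustration, not part of the solution.

```python
import numpy as np

n = 4                                                    # degree bound of the subspace
nodes, weights = np.polynomial.legendre.leggauss(n + 1)  # roots of p_{n+1}, Gauss weights

def lagrange(i, x):
    """Lagrange cardinal polynomial l_i on the Gauss nodes, evaluated at x."""
    terms = [(x - nodes[j]) / (nodes[i] - nodes[j])
             for j in range(n + 1) if j != i]
    return np.prod(terms, axis=0)

def R(i, x):
    """Scaled basis R_i = l_i / sqrt(w_i)."""
    return lagrange(i, x) / np.sqrt(weights[i])

# Evaluate the exact inner products with a much finer Gauss rule.
xs, ws = np.polynomial.legendre.leggauss(50)
G = np.array([[np.sum(ws * R(i, xs) * R(j, xs)) for j in range(n + 1)]
              for i in range(n + 1)])
assert np.allclose(G, np.eye(n + 1))                     # orthonormality

# Coefficient formula f_i = sqrt(w_i) f(x_i) for any f of degree <= n.
f = np.polynomial.Polynomial([1.0, -2.0, 0.5, 3.0])      # degree 3 <= n
coeffs = np.array([np.sum(ws * f(xs) * R(i, xs)) for i in range(n + 1)])
assert np.allclose(coeffs, np.sqrt(weights) * f(nodes))
print("orthonormality and coefficient formula verified")
```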
Problem 2. Consider a 2D fixed-point iteration of the form
$$x_{k+1}=f(x_k, y_k), \quad y_{k+1}=g(x_k, y_k) \tag{1}$$
Assume that the vector-valued function $\vec{H}(x, y)=(f(x, y), g(x, y))^{T}$ is continuously differentiable, and that the infinity norm of its Jacobian matrix is less than 1 at a unique fixed point $(x_{\infty}, y_{\infty})$.
Now consider a new iteration:
$$x_{k+1}=f(x_k, y_k), \quad y_{k+1}=g(x_{k+1}, y_k) \tag{2}$$
Prove that iteration (2) converges to the same fixed point as iteration (1) for initial conditions sufficiently close to the fixed point.
Solution: First, we check that the new iteration has the same fixed point as the original one. For (1), $x_{\infty}=f(x_{\infty}, y_{\infty})$ and $y_{\infty}=g(x_{\infty}, y_{\infty})$. Thus, for the new iteration,
$$\begin{aligned} f(x_{\infty}, y_{\infty})&=x_{\infty} \\ g(f(x_{\infty}, y_{\infty}), y_{\infty})&=g(x_{\infty}, y_{\infty})=y_{\infty} \end{aligned}$$
Next, the Jacobian of the new iteration map $(x, y) \mapsto (f(x, y),\, g(f(x, y), y))$ reads
$$\mathbf{J}_{2}=\begin{bmatrix} \partial_{1} f(x, y) & \partial_{2} f(x, y) \\ \partial_{1} g(f(x, y), y)\, \partial_{1} f(x, y) & \partial_{1} g(f(x, y), y)\, \partial_{2} f(x, y)+\partial_{2} g(f(x, y), y) \end{bmatrix} \tag{3}$$
The infinity norm of the Jacobian is the maximum absolute row sum. The first row has exactly the same absolute row sum as the corresponding row of the Jacobian of the original iteration; thus we have
$$\left|\partial_{1} f(x_{\infty}, y_{\infty})\right|+\left|\partial_{2} f(x_{\infty}, y_{\infty})\right|<1$$
The absolute row sum for the second row is
$$\begin{aligned} & \left|\partial_{1} g(f(x, y), y)\, \partial_{1} f(x, y)\right|+\left|\partial_{1} g(f(x, y), y)\, \partial_{2} f(x, y)+\partial_{2} g(f(x, y), y)\right| \\ & \leq\left|\partial_{1} g(f(x, y), y)\right|\left(\left|\partial_{1} f(x, y)\right|+\left|\partial_{2} f(x, y)\right|\right)+\left|\partial_{2} g(f(x, y), y)\right|. \end{aligned}$$
Evaluating at $(x_{\infty}, y_{\infty})$, we have
$$\begin{aligned} & \left|\partial_{1} g(f(x_{\infty}, y_{\infty}), y_{\infty})\right|\left(\left|\partial_{1} f(x_{\infty}, y_{\infty})\right|+\left|\partial_{2} f(x_{\infty}, y_{\infty})\right|\right)+\left|\partial_{2} g(f(x_{\infty}, y_{\infty}), y_{\infty})\right| \\ & \leq\left|\partial_{1} g(f(x_{\infty}, y_{\infty}), y_{\infty})\right|+\left|\partial_{2} g(f(x_{\infty}, y_{\infty}), y_{\infty})\right| \\ & =\left|\partial_{1} g(x_{\infty}, y_{\infty})\right|+\left|\partial_{2} g(x_{\infty}, y_{\infty})\right| \end{aligned}$$
which is less than 1 by assumption. Thus, the Jacobian of the new iteration has infinity norm less than 1 at the fixed point. Since the new iteration is continuously differentiable, there is a neighborhood of the fixed point such that iterations initialized in this neighborhood converge.
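A small numerical illustration (with hypothetical linear maps $f$ and $g$, chosen so that the Jacobian's infinity norm is $0.5<1$ everywhere) shows both iterations reaching the same fixed point:

```python
import numpy as np

# Hypothetical contraction: |0.3| + |0.2| = 0.5 and |0.1| + |0.4| = 0.5,
# so the infinity norm of the Jacobian is 0.5 < 1 at every point.
def f(x, y):
    return 0.3 * x + 0.2 * y + 1.0

def g(x, y):
    return 0.1 * x + 0.4 * y - 1.0

def iterate(update_y_with_new_x, x, y, steps=200):
    for _ in range(steps):
        x_new = f(x, y)
        y = g(x_new, y) if update_y_with_new_x else g(x, y)
        x = x_new
    return np.array([x, y])

p1 = iterate(False, 5.0, -3.0)        # iteration (1), Jacobi-style
p2 = iterate(True, 5.0, -3.0)         # iteration (2), Gauss-Seidel-style
assert np.allclose(p1, p2)
assert np.allclose(p1, [1.0, -1.5])   # the unique fixed point of this example
print("both iterations converge to", p1)
```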
Problem 3. Let $A \in \mathbf{R}^{m \times m}$ be a matrix with entries $a_{ij}$ which satisfy
$$a_{ii} \geq \sum_{j \neq i}\left|a_{ij}\right|+2, \quad a_{ii} \leq 7$$
(a) Prove that $A^{-1}$ exists.
(b) Prove that $\|A\|_{\infty}$ is the maximum row sum (of absolute values) of $A$.
(c) Find both a lower and an upper bound for $\|A\|_{\infty}$.
(d) Now assume $A=A^{T}$. Find bounds for $\|A\|_{2}$ and $\left\|A^{-1}\right\|_{2}$.
Solution:
(a) Suppose $A$ is singular and let $v=\left[v_{1}\; v_{2}\; \cdots\; v_{m}\right]^{T}$ be an eigenvector corresponding to eigenvalue 0, with $\max_{j}\left|v_{j}\right|=\left|v_{k}\right|$. Consider the $k$-th row of the vector equation $A v=0$:
$$a_{kk} v_{k}=-\sum_{j \neq k} a_{kj} v_{j}$$
Therefore,
$$\left|a_{kk}\right| \leq \sum_{j \neq k}\left|a_{kj}\right|\left|\frac{v_{j}}{v_{k}}\right| \leq \sum_{j \neq k}\left|a_{kj}\right|$$
which contradicts the assumption $a_{kk} \geq \sum_{j \neq k}\left|a_{kj}\right|+2$.
(b) Suppose the $k$-th row has the maximal absolute row sum, i.e.,
$$\max_{i}\Bigl\{\sum_{j}\left|a_{ij}\right|\Bigr\}=\sum_{j}\left|a_{kj}\right|$$
For the vector $v$ with $v_{j}=\operatorname{sgn}(a_{kj})$, we have
$$\|A v\|_{\infty} \geq \sum_{j}\left|a_{kj}\right|$$
since the right-hand side is the $k$-th element of $A v$. Noting that $\|v\|_{\infty}=1$, we have $\|A\|_{\infty} \geq \sum_{j}\left|a_{kj}\right|$. For the other inequality, we have that for any vector $u$,
$$\|A u\|_{\infty}=\max_{i}\Bigl|\sum_{j} a_{ij} u_{j}\Bigr| \leq \max_{i} \sum_{j}\left|a_{ij}\right|\left|u_{j}\right| \leq \max_{i}\Bigl\{\sum_{j}\left|a_{ij}\right|\Bigr\}\|u\|_{\infty}$$
(c) From the given information,
$$2 \leq \sum_{j}\left|a_{ij}\right|=a_{ii}+\sum_{j \neq i}\left|a_{ij}\right| \leq a_{ii}+\left(a_{ii}-2\right) \leq 12$$
Therefore, $2 \leq\|A\|_{\infty} \leq 12$.
(d) Since $A=A^{T}$, the singular values are the absolute values of the (real) eigenvalues. By the Gershgorin theorem,
$$\left|\lambda-a_{ii}\right| \leq \sum_{j \neq i}\left|a_{ij}\right|$$
Therefore,
$$a_{ii}-\sum_{j \neq i}\left|a_{ij}\right| \leq \lambda \leq a_{ii}+\sum_{j \neq i}\left|a_{ij}\right|$$
By the given bounds, $2 \leq \lambda \leq 12$, and hence $2 \leq\|A\|_{2} \leq 12$.
Now, since $A=A^{T}$ and $A$ is invertible, the smallest and largest singular values of $A^{-1}$ are the reciprocals of the largest and smallest singular values of $A$, respectively. Hence,
$$\frac{1}{12} \leq\left\|A^{-1}\right\|_{2} \leq \frac{1}{2}$$
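A sanity-check sketch (not part of the proof): sample random symmetric matrices satisfying the hypotheses, with diagonal set to $a_{ii}=\sum_{j\neq i}|a_{ij}|+2 \leq 7$, and confirm the derived norm bounds numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
for _ in range(100):
    B = rng.uniform(-0.5, 0.5, (m, m))
    A = (B + B.T) / 2                 # symmetric off-diagonal part
    np.fill_diagonal(A, 0.0)
    off = np.abs(A).sum(axis=1)       # sum_{j != i} |a_ij|  (at most 2.5 here)
    np.fill_diagonal(A, off + 2.0)    # a_ii = off_i + 2, so 2 <= a_ii <= 7

    norm_inf = np.abs(A).sum(axis=1).max()
    s = np.linalg.svd(A, compute_uv=False)
    assert 2.0 <= norm_inf <= 12.0    # part (c)
    assert 2.0 <= s.max() <= 12.0     # ||A||_2, part (d)
    assert 1 / 12 <= 1 / s.min() <= 0.5   # ||A^{-1}||_2 = 1/sigma_min, part (d)
print("all norm bounds verified on random samples")
```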
Problem 4. Consider a system of ODE initial value problems of the form
$$\frac{d}{dt} u=f(u), \quad u(0)=u_{0}$$
Assume that $f(u)$ has the property that the forward Euler (FE) method
$$U^{n+1}=U^{n}+k f\left(U^{n}\right)$$
satisfies
$$\left\|U^{n+1}\right\| \leq\left\|U^{n}\right\|$$
for some norm $\|\cdot\|$ and for all time steps $k$ with $0<k \leq k_{FE}$. Now consider the 2-stage Runge-Kutta method:
$$\begin{aligned} U^{(1)} &= U^{n}+k \beta_{10} f\left(U^{n}\right) \\ U^{n+1} &= \left\{\alpha_{20} U^{n}+k \beta_{20} f\left(U^{n}\right)\right\}+\left\{\alpha_{21} U^{(1)}+k \beta_{21} f\left(U^{(1)}\right)\right\} \end{aligned}$$
where
$$\beta_{10} \geq 0, \quad \beta_{20} \geq 0, \quad \beta_{21} \geq 0, \quad \alpha_{20} \geq 0, \quad \alpha_{21} \geq 0, \quad \alpha_{20}+\alpha_{21}=1$$
(a) Prove that the above 2-stage Runge-Kutta method also satisfies the inequality
$$\left\|U^{n+1}\right\| \leq\left\|U^{n}\right\|$$
under an appropriate time-step restriction $0 \leq k \leq k^{*}$, where you need to determine $k^{*}$ explicitly in terms of $k_{FE}$.
(b) Explicitly determine the coefficients
$$\beta_{10}, \quad \beta_{20}, \quad \beta_{21}, \quad \alpha_{20}, \quad \alpha_{21}$$
so that
(i) the method is second-order accurate; and
(ii) the maximum allowed time step, $k^{*}$, is as large as possible.
Solution:
(a) The first stage is simply the forward Euler method with time step $k \beta_{10}$. Therefore, as long as
$$k \cdot \max\left\{1, \beta_{10}\right\} \leq k_{FE},$$
we have that
$$\left\|U^{(1)}\right\| \leq\left\|U^{n}\right\|$$
The second stage can be written as a convex combination of two forward Euler steps:
$$U^{n+1}=\alpha_{20}\left\{U^{n}+k \frac{\beta_{20}}{\alpha_{20}} f\left(U^{n}\right)\right\}+\alpha_{21}\left\{U^{(1)}+k \frac{\beta_{21}}{\alpha_{21}} f\left(U^{(1)}\right)\right\}.$$
Requiring that
$$k \cdot \max\left\{1, \beta_{10}, \frac{\beta_{20}}{\alpha_{20}}, \frac{\beta_{21}}{\alpha_{21}}\right\} \leq k_{FE}$$
we get that
$$\left\|U^{n+1}\right\| \leq \alpha_{20}\left\|U^{n}\right\|+\alpha_{21}\left\|U^{(1)}\right\| \leq\left(\alpha_{20}+\alpha_{21}\right)\left\|U^{n}\right\|=\left\|U^{n}\right\|$$
Therefore, the 2-stage RK method satisfies $\left\|U^{n+1}\right\| \leq\left\|U^{n}\right\|$ under the following time-step constraint:
$$k \leq k^{*}=k_{FE} \cdot \min\left\{1, \frac{1}{\beta_{10}}, \frac{\alpha_{20}}{\beta_{20}}, \frac{\alpha_{21}}{\beta_{21}}\right\}$$
(b) To see the local truncation error, apply the method to $f(u)=\lambda u$ and let $z=k \lambda$:
$$\begin{aligned} U^{(1)} &= \left(1+z \beta_{10}\right) U^{n} \\ U^{n+1} &= \alpha_{20}\left(1+z \frac{\beta_{20}}{\alpha_{20}}\right) U^{n}+\alpha_{21}\left(1+z \frac{\beta_{21}}{\alpha_{21}}\right) U^{(1)} \end{aligned}$$
Combining these two results (and using the fact that $\alpha_{20}+\alpha_{21}=1$):
$$U^{n+1}=\left(1+z\left(\beta_{20}+\beta_{21}+\alpha_{21} \beta_{10}\right)+\frac{z^{2}}{2}\left(2 \beta_{10} \beta_{21}\right)\right) U^{n}$$
Therefore, for second-order accuracy we require that
$$\beta_{20}+\beta_{21}+\alpha_{21} \beta_{10}=1 \quad \text{and} \quad \beta_{10} \beta_{21}=\frac{1}{2}.$$
For optimal stability we require that
$$0 \leq \beta_{10} \leq 1, \quad 0 \leq \beta_{20} \leq \alpha_{20} \leq 1, \quad 0 \leq \beta_{21} \leq \alpha_{21} \leq 1, \quad \alpha_{20}+\alpha_{21}=1.$$
The optimal solution requires that
$$\beta_{10}=1 \;\Longrightarrow\; \beta_{21}=\frac{1}{2} \;\Longrightarrow\; \beta_{20}+\alpha_{21}=\frac{1}{2} \;\Longrightarrow\; \alpha_{21}=\frac{1}{2} \;\Longrightarrow\; \beta_{20}=0 \;\text{ and }\; \alpha_{20}=\frac{1}{2}$$
Putting these all together yields the following 2-stage RK method, which is norm-preserving under the optimal time-step restriction $k \leq k^{*}=k_{FE}$:
$$\begin{aligned} U^{(1)} &= U^{n}+k f\left(U^{n}\right) \\ U^{n+1} &= \frac{1}{2}\left\{U^{n}+U^{(1)}+k f\left(U^{(1)}\right)\right\} \end{aligned}$$
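As a minimal convergence sketch (an illustration, not part of the solution), applying this two-stage method to the test problem $u'=-u$, $u(0)=1$, shows the expected second-order error decay at $t=1$:

```python
import math

f = lambda u: -u

def rk2(u, k, steps):
    """The optimal 2-stage method: U1 = U + k f(U); U <- (U + U1 + k f(U1))/2."""
    for _ in range(steps):
        u1 = u + k * f(u)                  # forward Euler predictor
        u = 0.5 * (u + u1 + k * f(u1))     # convex-combination corrector
    return u

# Errors at t = 1 for successively halved time steps k = 1/n.
errors = [abs(rk2(1.0, 1.0 / n, n) - math.exp(-1.0)) for n in (50, 100, 200)]

# Halving k should cut the error by about a factor of 2^2 = 4 (order 2).
ratios = [errors[i] / errors[i + 1] for i in range(2)]
assert all(3.5 < r < 4.5 for r in ratios)
print("observed convergence ratios:", ratios)
```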
Problem 5. Construct a third-order accurate Lax-Wendroff-type method for $u_{t}+a u_{x}=0$ ($a>0$ a constant) in the following way:
(a) Expand $u(t+k, x)$ in a Taylor series and keep the first four terms. Replace all time derivatives by spatial derivatives using the equation.
Construct a cubic polynomial passing through the points $U_{j-2}^{n}, U_{j-1}^{n}, U_{j}^{n}, U_{j+1}^{n}$.
Approximate the spatial derivatives in the Taylor series by the exact derivatives of the above constructed cubic polynomial.
(b) Verify that the truncation error is $O(k^{3})$ if $h=O(k)$.
Solution:
(a) By Taylor's expansion in time, and using the equation, we have
$$\begin{aligned} u(t+k, x) &= u(t, x)+k u_{t}(t, x)+\frac{k^{2}}{2} u_{tt}(t, x)+\frac{k^{3}}{6} u_{ttt}(t, x)+O\left(k^{4}\right) \\ &= u(t, x)-a k u_{x}(t, x)+\frac{(a k)^{2}}{2} u_{xx}(t, x)-\frac{(a k)^{3}}{6} u_{xxx}(t, x)+O\left(k^{4}\right) \end{aligned}$$
Use Lagrange interpolation to construct a cubic polynomial through the nodes $x_{0}-2h, x_{0}-h, x_{0}, x_{0}+h$. Since the interpolation is in the spatial variable, we omit the time variable for this part. The polynomial $p_{3}(x)$ satisfying $u=p_{3}+O\left(h^{4}\right)$ is
$$\begin{aligned} p_{3}(x)= & -\frac{\left(x-x_{0}+h\right)\left(x-x_{0}\right)\left(x-x_{0}-h\right)}{6 h^{3}} u\left(x_{0}-2 h\right) \\ & +\frac{\left(x-x_{0}+2 h\right)\left(x-x_{0}\right)\left(x-x_{0}-h\right)}{2 h^{3}} u\left(x_{0}-h\right) \\ & -\frac{\left(x-x_{0}+2 h\right)\left(x-x_{0}+h\right)\left(x-x_{0}-h\right)}{2 h^{3}} u\left(x_{0}\right) \\ & +\frac{\left(x-x_{0}+2 h\right)\left(x-x_{0}+h\right)\left(x-x_{0}\right)}{6 h^{3}} u\left(x_{0}+h\right). \end{aligned}$$
Moreover, the errors in the derivatives are
$$\begin{aligned} u^{\prime}\left(x_{0}\right) &= p_{3}^{\prime}\left(x_{0}\right)+O\left(h^{3}\right) \\ u^{\prime\prime}\left(x_{0}\right) &= p_{3}^{\prime\prime}\left(x_{0}\right)+O\left(h^{2}\right) \\ u^{\prime\prime\prime}\left(x_{0}\right) &= p_{3}^{\prime\prime\prime}\left(x_{0}\right)+O(h) \end{aligned}$$
Now we substitute the expressions $p_{3}^{\prime}\left(x_{0}\right), p_{3}^{\prime\prime}\left(x_{0}\right), p_{3}^{\prime\prime\prime}\left(x_{0}\right)$ in place of $u_{x}, u_{xx}, u_{xxx}$, respectively. By a direct calculation,
$$\begin{aligned}
p_{3}^{\prime}\left(x_{0}\right) &= \frac{u\left(x_{0}-2 h\right)}{6 h}-\frac{u\left(x_{0}-h\right)}{h}+\frac{u\left(x_{0}\right)}{2 h}+\frac{u\left(x_{0}+h\right)}{3 h} \\
p_{3}^{\prime \prime}\left(x_{0}\right) &= \frac{u\left(x_{0}-h\right)}{h^{2}}-\frac{2 u\left(x_{0}\right)}{h^{2}}+\frac{u\left(x_{0}+h\right)}{h^{2}} \\
p_{3}^{\prime \prime \prime}\left(x_{0}\right) &= -\frac{u\left(x_{0}-2 h\right)}{h^{3}}+\frac{3 u\left(x_{0}-h\right)}{h^{3}}-\frac{3 u\left(x_{0}\right)}{h^{3}}+\frac{u\left(x_{0}+h\right)}{h^{3}}
\end{aligned}$$
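These finite-difference formulas can be sanity-checked numerically. The sketch below (the test function $u=\sin$ and the point $x_{0}=0.3$ are illustrative choices, not part of the solution) confirms the expected orders $O(h^{3})$, $O(h^{2})$, $O(h)$:

```python
import numpy as np

def stencil_errors(h, x0=0.3):
    """Apply the three stencils above to u = sin at x0 and return the errors
    against the exact derivatives u' = cos, u'' = -sin, u''' = -cos."""
    u = np.sin
    um2, um1, u0, up1 = u(x0 - 2*h), u(x0 - h), u(x0), u(x0 + h)
    d1 = um2/(6*h) - um1/h + u0/(2*h) + up1/(3*h)   # p3'(x0),   O(h^3)
    d2 = (um1 - 2*u0 + up1) / h**2                  # p3''(x0),  O(h^2)
    d3 = (-um2 + 3*um1 - 3*u0 + up1) / h**3         # p3'''(x0), O(h)
    return (abs(d1 - np.cos(x0)), abs(d2 + np.sin(x0)), abs(d3 + np.cos(x0)))

coarse = stencil_errors(1e-2)
fine = stencil_errors(5e-3)
orders = [np.log2(ec / ef) for ec, ef in zip(coarse, fine)]
print(orders)   # approximately [3, 2, 1]
```

Halving $h$ and taking $\log_{2}$ of the error ratio recovers the order of each stencil.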
Plugging these expressions into the Taylor expansion, we obtain
$$\begin{aligned}
U_{j}^{n+1}= U_{j}^{n} &-\frac{a k}{6 h}\left(U_{j-2}^{n}-6 U_{j-1}^{n}+3 U_{j}^{n}+2 U_{j+1}^{n}\right) \\
&+\frac{(a k)^{2}}{2 h^{2}}\left(U_{j-1}^{n}-2 U_{j}^{n}+U_{j+1}^{n}\right) \\
&-\frac{(a k)^{3}}{6 h^{3}}\left(-U_{j-2}^{n}+3 U_{j-1}^{n}-3 U_{j}^{n}+U_{j+1}^{n}\right)
\end{aligned}$$
(b) From the Taylor expansion and the approximation errors above,
$$\begin{aligned}
\frac{u(t+k, x)-u(t, x)}{k}= & -a u_{x}+\frac{a^{2} k}{2} u_{x x}-\frac{a^{3} k^{2}}{6} u_{x x x}+O\left(k^{3}\right) \\
= & -\frac{a}{6 h}(u(t, x-2 h)-6 u(t, x-h)+3 u(t, x)+2 u(t, x+h))+O\left(h^{3}\right) \\
& +\frac{a^{2} k}{2 h^{2}}(u(t, x-h)-2 u(t, x)+u(t, x+h))+O\left(k h^{2}\right) \\
& -\frac{a^{3} k^{2}}{6 h^{3}}(-u(t, x-2 h)+3 u(t, x-h)-3 u(t, x)+u(t, x+h))+O\left(k^{2} h\right) \\
& +O\left(k^{3}\right)
\end{aligned}$$
Hence, if $h=O(k)$, the truncation error is $O\left(k^{3}\right)$, i.e., the scheme is third-order accurate.
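As an illustration of this result, the third-order convergence can be observed numerically. The setup below is my own choice, not part of the problem: periodic boundary conditions on $[0,1)$, a smooth initial profile, and a fixed Courant number $\nu = ak/h = 0.5$.

```python
import numpy as np

def step(U, nu):
    """One step of the scheme above for u_t + a u_x = 0 with nu = a*k/h.
    np.roll supplies the periodic neighbors U_{j-2}, U_{j-1}, U_{j+1}."""
    Um2, Um1, Up1 = np.roll(U, 2), np.roll(U, 1), np.roll(U, -1)
    return (U
            - nu / 6 * (Um2 - 6 * Um1 + 3 * U + 2 * Up1)
            + nu**2 / 2 * (Um1 - 2 * U + Up1)
            - nu**3 / 6 * (-Um2 + 3 * Um1 - 3 * U + Up1))

a, T, nu = 1.0, 0.5, 0.5          # advection speed, final time, fixed a*k/h
errors = []
for m in (64, 128, 256):
    h = 1.0 / m
    k = nu * h / a
    x = np.arange(m) * h
    U = np.sin(2 * np.pi * x)
    nsteps = round(T / k)
    for _ in range(nsteps):
        U = step(U, nu)
    exact = np.sin(2 * np.pi * (x - a * nsteps * k))  # transported profile
    errors.append(np.max(np.abs(U - exact)))

rates = [np.log2(e1 / e2) for e1, e2 in zip(errors, errors[1:])]
print(rates)   # close to 3, consistent with third-order accuracy
```

With $\nu$ held fixed, $k$ and $h$ are proportional, so the observed rate in $h$ matches the $O(k^{3})$ estimate above.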
Problem 6. Suppose you have \$60K to invest and there are 3 investment options available. You must invest in multiples of \$10K. If $d_{i}$ dollars are invested in investment $i$, then you receive a net value (the profit) of $r_{i}\left(d_{i}\right)$ dollars. For $d_{i}>0$ we have
$$\begin{aligned}
r_{1}\left(d_{1}\right) &= \left(7 d_{1}+2\right) \times 10 \\
r_{2}\left(d_{2}\right) &= \left(3 d_{2}+7\right) \times 10 \\
r_{3}\left(d_{3}\right) &= \left(4 d_{3}+5\right) \times 10
\end{aligned}$$
and $r_{1}(0)=r_{2}(0)=r_{3}(0)=0$. All amounts are measured in units of \$10K. The objective is to maximize the net value of your investments. This can be formulated as an integer linear programming problem:
$$\begin{aligned}
&\max _{d_{1}, d_{2}, d_{3}} r_{1}\left(d_{1}\right)+r_{2}\left(d_{2}\right)+r_{3}\left(d_{3}\right) \\
&\text {such that } d_{1}+d_{2}+d_{3} \leq 6
\end{aligned}$$
$$d_{i} \geq 0, \quad i=1,2,3 \text { are integers. }$$
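Since the feasible set is tiny, the integer program can also be checked by direct enumeration. In the sketch below, profits are kept in the same scaled units used in the tables that follow (i.e., the trailing factor of 10 in each $r_{i}$ is dropped):

```python
from itertools import product

def profit(d1, d2, d3):
    """Total profit in scaled units; zero investment yields zero profit."""
    r1 = 7*d1 + 2 if d1 > 0 else 0
    r2 = 3*d2 + 7 if d2 > 0 else 0
    r3 = 4*d3 + 5 if d3 > 0 else 0
    return r1 + r2 + r3

# Enumerate all integer allocations with d1 + d2 + d3 <= 6.
best = max(
    (p for p in product(range(7), repeat=3) if sum(p) <= 6),
    key=lambda p: profit(*p),
)
print(best, profit(*best))   # (4, 1, 1) 49
```

Enumeration is fine here with only a few dozen feasible points; the dynamic-programming solution below scales far better as the budget grows.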
Solution: We solve this problem by dynamic programming. The key elements are as follows.

The stage $i$ is just investment $i$.
The decision at stage $i$ is $d_{i}$, the amount to invest in investment $i$. The immediate return is $r_{i}\left(d_{i}\right)$, and clearly $d_{i} \in\{0,1, \ldots, 6\}$.
The state at stage $i$ is $x_{i}$, the total amount still available to invest at stages $i, \ldots, 3$.
Define $f_{i}\left(x_{i}\right)=$ the maximum net value attainable from stage $i$ onward given that $x_{i}$ dollars are available. Then the recursion equation is

$$f_{i}\left(x_{i}\right)=\max _{0 \leq d_{i} \leq x_{i}}\left\{r_{i}\left(d_{i}\right)+f_{i+1}\left(x_{i}-d_{i}\right)\right\}$$
The boundary condition is $f_{4}\left(x_{4}\right)=0$, and the answer is $f_{1}(6)$. This is a backward recursion, and the computation goes as follows.
Stage 3: Note that

$$f_{3}\left(x_{3}\right)=\max _{0 \leq d_{3} \leq x_{3}}\left\{r_{3}\left(d_{3}\right)\right\}$$
We have the following table:
Table 1: $r_{3}\left(d_{3}\right)=4 d_{3}+5$
| $x_{3}$ | $d_{3}=0$ | $d_{3}=1$ | $d_{3}=2$ | $d_{3}=3$ | $d_{3}=4$ | $d_{3}=5$ | $d_{3}=6$ | $f_{3}\left(x_{3}\right)$ | $d_{3}^{*}$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 0 | 0 | - | - | - | - | - | - | 0 | 0 |
| 1 | 0 | 9 | - | - | - | - | - | 9 | 1 |
| 2 | 0 | 9 | 13 | - | - | - | - | 13 | 2 |
| 3 | 0 | 9 | 13 | 17 | - | - | - | 17 | 3 |
| 4 | 0 | 9 | 13 | 17 | 21 | - | - | 21 | 4 |
| 5 | 0 | 9 | 13 | 17 | 21 | 25 | - | 25 | 5 |
| 6 | 0 | 9 | 13 | 17 | 21 | 25 | 29 | 29 | 6 |
Stage 2: Note that
$$f_{2}\left(x_{2}\right)=\max _{0 \leq d_{2} \leq x_{2}}\left\{r_{2}\left(d_{2}\right)+f_{3}\left(x_{2}-d_{2}\right)\right\}$$
We have the following table
Table 2: $r_{2}\left(d_{2}\right)+f_{3}\left(x_{2}-d_{2}\right)=3 d_{2}+7+f_{3}\left(x_{2}-d_{2}\right)$
| $x_{2}$ | $d_{2}=0$ | $d_{2}=1$ | $d_{2}=2$ | $d_{2}=3$ | $d_{2}=4$ | $d_{2}=5$ | $d_{2}=6$ | $f_{2}\left(x_{2}\right)$ | $d_{2}^{*}$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 0 | 0 | - | - | - | - | - | - | 0 | 0 |
| 1 | 9 | 10 | - | - | - | - | - | 10 | 1 |
| 2 | 13 | 19 | 13 | - | - | - | - | 19 | 1 |
| 3 | 17 | 23 | 22 | 16 | - | - | - | 23 | 1 |
| 4 | 21 | 27 | 26 | 25 | 19 | - | - | 27 | 1 |
| 5 | 25 | 31 | 30 | 29 | 28 | 22 | - | 31 | 1 |
| 6 | 29 | 35 | 34 | 33 | 32 | 31 | 25 | 35 | 1 |
Stage 1: Note that
$$f_{1}\left(x_{1}\right)=\max _{0 \leq d_{1} \leq 6}\left\{r_{1}\left(d_{1}\right)+f_{2}\left(6-d_{1}\right)\right\}$$
We have the following table
Thus the optimal net value is \$49,000 and the investment policy is $d_{1}=4, d_{2}=1, d_{3}=1$ (in units of \$10K).
Table 3: $r_{1}\left(d_{1}\right)+f_{2}\left(x_{1}-d_{1}\right)=7 d_{1}+2+f_{2}\left(6-d_{1}\right)$
| $x_{1}$ | $d_{1}=0$ | $d_{1}=1$ | $d_{1}=2$ | $d_{1}=3$ | $d_{1}=4$ | $d_{1}=5$ | $d_{1}=6$ | $f_{1}\left(x_{1}\right)$ | $d_{1}^{*}$ |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 6 | 35 | 40 | 43 | 46 | 49 | 47 | 44 | 49 | 4 |
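The backward recursion in the tables above is straightforward to mechanize. The following sketch (variable and function names are my own) reproduces the tables and recovers the optimal policy:

```python
# Backward dynamic programming for the investment problem.
# Amounts are in units of $10K; profits are in the same scaled units as
# Tables 1-3 (the trailing factor of 10 in each r_i is dropped).

def r(i, d):
    """Immediate return r_i(d); zero investment yields zero profit."""
    if d == 0:
        return 0
    a, b = {1: (7, 2), 2: (3, 7), 3: (4, 5)}[i]
    return a * d + b

def solve(budget=6):
    # f[i][x] = best value using stages i..3 with x units available;
    # boundary condition f_4(x) = 0.
    f = {4: {x: 0 for x in range(budget + 1)}}
    best = {}                               # best[i][x] = optimal d_i
    for i in (3, 2, 1):
        f[i], best[i] = {}, {}
        for x in range(budget + 1):
            vals = {d: r(i, d) + f[i + 1][x - d] for d in range(x + 1)}
            best[i][x] = max(vals, key=vals.get)
            f[i][x] = vals[best[i][x]]
    # Walk forward through the stored decisions to recover the policy.
    policy, x = [], budget
    for i in (1, 2, 3):
        d = best[i][x]
        policy.append(d)
        x -= d
    return f[1][budget], policy

print(solve())   # (49, [4, 1, 1])
```

The nested loops cost $O(\text{stages} \times \text{budget}^2)$, versus exponential growth for naive enumeration as the number of stages increases.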